Date: Wed, 6 Nov 1996 16:45:52 -0500
From: Bruce Schneier
Subject: Why cryptography is harder than it looks

From e-mail to cellular communications, from secure Web access to digital cash, cryptography is an essential part of today's information systems. Cryptography helps provide accountability, fairness, accuracy, and confidentiality. It can prevent fraud in electronic commerce and assure the validity of financial transactions. Used properly, it protects your anonymity and proves your identity. It can keep vandals from altering your Web page and prevent industrial competitors from reading your confidential documents. And in the future, as commerce and communications continue to move to computer networks, cryptography will become more and more vital.

But the cryptography now on the market doesn't provide the level of security it advertises. Most systems are designed and implemented not by cryptographers, but by engineers who think cryptography is like any other computer technology. It's not. You can't make systems secure by tacking on cryptography as an afterthought. You have to know what you are doing every step of the way, from conception through installation.

Billions of dollars are spent on computer security, and most of it is wasted on insecure products. After all, weak cryptography looks the same on the shelf as strong cryptography. Two e-mail encryption products may have almost the same user interface, yet one is secure while the other permits eavesdropping. A feature comparison chart may suggest that two programs have similar features, although one has gaping security holes that the other doesn't. An experienced cryptographer can tell the difference. So can a thief.

Present-day computer security is a house of cards; it may stand for now, but it can't last. Many insecure products have not yet been broken because they are still in their infancy. But as these products become more and more widely used, they will become tempting targets for criminals. The press will publicize the attacks, undermining public confidence in these systems. Ultimately, products will win or lose in the marketplace depending on the strength of their security.

Threats to computer systems

Every form of commerce ever invented has been subject to fraud, from rigged scales in a farmers' market to counterfeit currency to phony invoices. Electronic commerce schemes will also face fraud, through forgery, misrepresentation, denial of service, and cheating. You can't walk the streets wearing a mask of someone else's face, but in the digital world it is easy to impersonate others. In fact, computerization makes the risks even greater, by allowing automated and systematic attacks that are impossible against non-automated systems. A thief can make a living skimming a penny from every Visa cardholder. Only strong cryptography can protect against these attacks.

Privacy violations are another threat. Some attacks on privacy are targeted: a member of the press tries to read a public figure's e-mail, or a company tries to intercept a competitor's communications. Others are broad data-harvesting attacks, searching a sea of data for interesting information: a list of rich widows, AZT users, or people who view a particular Web page.

Electronic vandalism is an increasingly serious problem. Already computer vandals have graffitied the CIA's Web page, mail-bombed Internet providers, and canceled thousands of newsgroup messages. And of course, vandals and thieves routinely break into networked computer systems.
When security safeguards aren't adequate, trespassers run little risk of getting caught. Attackers don't follow rules; they cheat. They can attack a system using techniques the original designers never anticipated. In California, art thieves burgle homes by cutting through the walls with a chain saw. Sophisticated, expensive home security systems don't stand a chance against this sort of attack. Computer thieves come through the walls too. They steal technical data, bribe insiders, modify software, and collude. They take advantage of technologies newer than the system, and even invent new mathematics to attack the system with.

Attackers also have more time than defenders; it's unusual for a good guy to disassemble and examine a public system. SecurID was on the market for years before anyone looked at its key management, and the vendor didn't even strip its binaries. And the odds favor the attacker: defenders have to protect against every possible vulnerability, but an attacker only has to find one security flaw to compromise the whole system.

What cryptography can and can't do

No one can guarantee 100% security. But we can work toward 100% risk acceptance. Fraud exists in current commerce systems: cash can be counterfeited, checks altered, credit card numbers stolen. Yet these systems are still successful because the benefits and conveniences outweigh the losses. Privacy systems (wall safes, door locks, curtains) are not perfect, but they're often good enough. A good cryptographic system strikes a balance between what is possible and what is acceptable.

Strong cryptography can successfully withstand targeted attacks up to a point: the point at which it becomes easier to get the information some other way. A computer encryption program, no matter how good, will not prevent an attacker from going through someone's garbage. But it can absolutely prevent data-harvesting attacks; no attacker can go through enough trash to find every AZT user in the country.

The good news about cryptography is that we already have the algorithms and protocols we need to secure our systems. The bad news is that that was the easy part; successful implementation requires considerable expertise. The areas of security that interact with people (key management, human/computer interface security, access control) often defy analysis. And the disciplines of public-key infrastructure, software security, computer security, network security, and tamper-resistant hardware design are very poorly understood.

Companies often get even the easy part wrong and implement insecure algorithms and protocols. But even so, practical cryptography is rarely broken through the mathematics; other parts of systems are much easier to break. The best protocol ever invented can fall to an easy attack if no one pays attention to the more complex and subtle implementation issues. Netscape's security fell to a bug in its random-number generator. Flaws can be anywhere: the threat model, the system design, the software or hardware implementation, the system management. Security is a chain, and a single weak link can break the entire system. Fatal bugs may be far removed from the security portion of the software; a design decision that has nothing to do with security can nonetheless create a security flaw.

Once you find a security flaw, you can fix it. But finding the flaws to begin with can be incredibly difficult. Security is different from any other design requirement, because functionality does not equal quality.
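That random-number failure is worth making concrete. What follows is a minimal sketch in Python, not Netscape's actual code (its generator was reportedly seeded from the time of day and process IDs); the function names and key size are illustrative. Two session-key generators share the same interface and pass the same functional tests, but only one is secure.

    import os
    import random
    import secrets
    import time

    def weak_session_key(nbytes=16):
        # Seeded from guessable values (the time and the process ID) and drawn
        # from a non-cryptographic generator. An attacker who can narrow down
        # the seed can regenerate every key this function ever produces.
        seed = int(time.time()) ^ os.getpid()
        rng = random.Random(seed)
        return bytes(rng.getrandbits(8) for _ in range(nbytes))

    def strong_session_key(nbytes=16):
        # Same interface, same apparent behavior, but the bytes come from the
        # operating system's cryptographic random-number source.
        return secrets.token_bytes(nbytes)

No beta test distinguishes the two; only an analysis of where the bits come from does.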
If a word processor prints successfully, you know that the print function works. Security is different; just because a safe recognizes the correct combination does not mean that its contents are secure from a safecracker. No amount of general beta testing will reveal a security flaw, and there is no test that can prove the absence of flaws.

Threat models

A good design starts with a threat model: what is the system designed to protect, from whom, and for how long? The threat model must take the entire system into account: not just the data to be protected, but the people who will use the system and how they will use it. What motivates the attackers? What kinds of abuses can be tolerated? Must attacks be prevented, or can they just be detected? If the worst happens and one of the fundamental security assumptions of a system is broken, what kind of disaster recovery is possible? The answers to these questions can't be standardized; they're different for every system. Too often, designers don't take the time to build accurate threat models or analyze the real risks.

Threat models allow both product designers and consumers to determine what security measures they need. Does it make sense to encrypt your hard drive if you don't put your files in a safe? How can someone inside the company defraud the commerce system? What exactly is the cost of defeating the tamper-resistance on the smart card? You can't design a secure system unless you understand what it has to be secure against.

System design

System design should begin only after you understand the threat model. This design work is the mainstay of the science of cryptography, and it is very specialized. Cryptography blends several areas of mathematics: number theory, complexity theory, information theory, probability theory, abstract algebra, and formal analysis, among others. Few can do the science properly, and a little knowledge is a dangerous thing: inexperienced cryptographers almost always design flawed systems. Good cryptographers know that nothing substitutes for extensive peer review and years of analysis. Quality systems use published and well-understood algorithms and protocols; to use unpublished or unproven elements in a design is risky at best.

Cryptographic system design is also an art. A designer must strike a balance between security and accessibility, anonymity and accountability, privacy and availability. Science alone cannot prove security; only experience, and the intuition born of experience, can help the cryptographer design secure systems and find flaws in existing designs.

Good security systems are made up of small, verifiable (and verified!) chunks, each of which provides some service that clearly reduces to a primitive, such as the difficulty of forging a certain hash function. There are a lot of big systems out there (DCE, for example) that are simply too big to verify in a reasonable amount of time.

Implementation

There is an enormous difference between a mathematical algorithm and its concrete implementation in hardware or software. Cryptographic system designs are fragile. Just because a protocol is logically secure doesn't mean it will stay secure when a designer starts defining message structures and passing bits around. Close isn't close enough; these systems must be implemented exactly, perfectly, or they will fail. A poorly designed user interface can make a hard-drive encryption program completely insecure. A bad clock interface can leave a gaping hole in a communications security program.
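The same fragility shows up a few lines of code at a time. Here is a hedged sketch (the function names are mine; the primitive is the standard HMAC-SHA256 from Python's library): two ways of verifying a message authentication tag. Both use the published algorithm correctly, but the first compares tags with ordinary equality, which usually stops at the first mismatched byte and can therefore leak, through timing, how close a forged tag is to the right one.

    import hashlib
    import hmac

    def verify_tag_leaky(key, message, tag):
        expected = hmac.new(key, message, hashlib.sha256).digest()
        # Ordinary equality usually exits at the first differing byte, so the
        # time it takes depends on how much of the forgery is correct.
        return expected == tag

    def verify_tag(key, message, tag):
        expected = hmac.new(key, message, hashlib.sha256).digest()
        # Constant-time comparison, provided for exactly this purpose.
        return hmac.compare_digest(expected, tag)

The design reduces cleanly to a well-studied primitive, exactly as it should; the remaining hole is a single comparison operator, invisible to anyone who only runs the program.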
A false reliance on tamper-resistant hardware can render an electronic commerce system all but useless. Since these mistakes aren't apparent in testing, they end up in finished products.

Implementers are under pressure from budgets and deadlines. They make the same mistakes over and over again, in many different products. They use bad random-number generators, don't check properly for error conditions, and leave secret information in swap files. Many of these flaws cannot be studied in the scientific literature because they are not technically interesting. The only way to learn how to prevent them is to make and break systems, again and again.

Procedures and management

In the end, many security systems are broken by the people who use them; most fraud against commerce systems is perpetrated by insiders. Honest users cause problems too, because they usually don't care about security. They want simplicity, convenience, and compatibility with existing (insecure) systems. They choose bad passwords, write them down, give friends and relatives their private keys, leave computers logged in, and so on. It's hard to sell door locks to people who don't want to be bothered with keys. A well-designed system must take people into account, and people are often the hardest factor to design around.

This is where you find the real cost of security. It's not in the algorithms; strong cryptography is no more expensive than weak cryptography. It's not even in the design or the implementation; a good system, while expensive to build and verify, is far cheaper than the losses from an insecure one. The cost is in getting people to use it. It's hard to convince consumers that their financial privacy is important when they are willing to leave a detailed purchase record in exchange for one thousandth of a free trip to Hawaii. It's hard to build a system that provides strong authentication on top of systems that can be penetrated by knowing someone's mother's maiden name. Security is routinely bypassed by store clerks, senior executives, and anyone else who just needs to get the job done.

Even when users do understand the need for strong security, they have no way of comparing systems. Computer magazines compare security products by listing their features, not by evaluating their security. Marketing literature makes claims that are simply not true; a competing product that is more secure and more expensive will only fare worse in the market. People rely on the government to look out for their safety and security in areas where they lack the knowledge to make evaluations: food packaging, aviation, medicine. For cryptography, the U.S. government is doing just the opposite.

Tomorrow's problems

When an airplane crashes, there are inquiries, analyses, and reports. Information is widely disseminated, and everyone learns from the failure. You can read a complete record of airline accidents from the beginning of commercial aviation. When a bank's electronic commerce system is breached and defrauded, it's usually covered up. If it does make the newspapers, details are omitted. No one analyzes the attack; no one learns from the mistake. The bank tries to patch things in secret, hoping the public won't lose confidence in a system that deserves no confidence.

It's no longer good enough to install security patches in response to attacks. Computer systems move too quickly; a security flaw described on the Internet can be exploited by thousands within a day. Today's systems must anticipate future attacks.
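One way to build that anticipation in is to make the attacker's cost an explicit, adjustable parameter. As a hedged sketch (the function names and the iteration count are illustrative assumptions, not recommendations), a password verifier can store a salted, iterated hash together with its work factor, so the cost of guessing can be raised in the field as computers get faster:

    import hashlib
    import hmac
    import os

    ITERATIONS = 600000  # illustrative work factor, meant to be raised over time

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        # Store all three values; recording the iteration count lets old
        # entries be verified, and upgraded, after the parameter is increased.
        return salt, digest, ITERATIONS

    def check_password(password, salt, digest, iterations):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, digest)

Recording the parameters alongside the verifier is what makes the later upgrade possible without redesigning the system.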
Any comprehensive system, whether for authenticated communications, secure data storage, or electronic commerce, is likely to remain in use for five years or more. To remain secure, it must be able to withstand the future: smarter attackers, more computational power, and greater incentives to subvert a widespread system. There won't be time to upgrade these systems in the field.

History has taught us: never underestimate the amount of money, time, and effort someone will expend to thwart a security system. Use orthogonal defense systems: different ways of doing the same thing. Secure authentication might mean digital signatures on the desktop, SSL protecting the incoming transmission, and IPsec from the firewall to the back end, along with multiple audit points along the way for recovery and evidence. Breaking part of such a system gives an attacker a wedge, but doesn't cause the whole thing to collapse.

It's always better to assume the worst. Assume your adversaries are better than they are. Assume science and technology will soon be able to do things they cannot do yet. Give yourself a margin for error. Give yourself more security than you need today. When the unexpected happens, you'll be glad you did.

Bruce Schneier, Counterpane Systems
Author of APPLIED CRYPTOGRAPHY